Using Naming Strategies to Make Massively Parallel Systems Work
Author
Abstract
In order to handle massively parallel systems and make them usable, an adaptive, application-oriented operating system is required. This application orientation is represented by the family concept of parallel operating systems. Incremental loading of operating system services supports the family character by automatically extending the system's active object structure when necessary. In this way, switching between different operating system family members may also be realized. A new active object will be incrementally loaded if its invocation fails because it does not yet exist. This is noticed during object binding while using the naming services. The use of the naming system is exploited and extended to obtain a flexible and configurable mechanism for triggering incremental loading. This mechanism is built from freely definable naming strategies and exceptions, which again result in a family, namely a family of naming services.
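The mechanism described above can be sketched in a few lines: a name lookup that fails does not simply raise an error, but instead invokes a pluggable loading strategy that extends the active object structure on demand. This is a minimal illustrative sketch only; all class and function names here are assumptions, not the paper's actual API.

```python
# Hedged sketch of exception-triggered incremental loading via a naming
# service. A failed binding invokes a configurable loader (the "naming
# strategy") to create the missing active object, then caches it.

class NameNotBound(Exception):
    """Raised when a name cannot be resolved even after loading."""

class NamingService:
    def __init__(self, loader):
        self._bindings = {}      # name -> active object
        self._loader = loader    # configurable strategy for missing names

    def bind(self, name, obj):
        self._bindings[name] = obj

    def resolve(self, name):
        try:
            return self._bindings[name]
        except KeyError:
            # Invocation failed: the active object does not yet exist.
            # The naming strategy reacts by incrementally loading it.
            obj = self._loader(name)
            if obj is None:
                raise NameNotBound(name)
            self.bind(name, obj)  # extend the active object structure
            return obj

# Usage: the loader below is a placeholder for real incremental loading.
def demo_loader(name):
    return f"<service {name}>"

ns = NamingService(demo_loader)
print(ns.resolve("file_system"))  # loaded on first use, then cached
```

Because the loader is just a parameter, exchanging it yields a different member of the family of naming services, which matches the family character the abstract emphasizes.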
Related Resources
Molecular simulation of complex systems using massively parallel supercomputers
Massively parallel supercomputers, such as the 150 Gigaflop Intel Paragons located at Oak Ridge National Laboratory and Sandia National Laboratories, make possible molecular simulation of systems of unprecedented complexity and realism. We describe some of the issues related to efficient implementation of molecular dynamics and Monte Carlo simulations on massively parallel supercomputers. The a...
Recursive least-squares using a hybrid Householder algorithm on massively parallel SIMD systems
Within the context of recursive least-squares, the implementation of a Householder algorithm for block updating the QR decomposition, on massively parallel SIMD systems, is considered. Initially, two implementations based on different mapping strategies for distributing the data matrices over the processing elements of the parallel computer are investigated. Timing models show that neither of th...
Experiments with Dataflow on a General-Purpose Parallel Computer
The MIT J-Machine [2], a massively-parallel computer, is an experiment in providing general-purpose mechanisms for communication, synchronization, and naming that will support a wide variety of parallel models of computation. We have developed two experimental dataflow programming systems for the J-Machine. For the first system, we adapted Papadopoulos' explicit token store [12] to implement static ...
Experiences Implementing Dataflow On
The MIT J-Machine [3], a massively-parallel computer, is an experiment in providing general-purpose mechanisms for communication, synchronization, and naming that will support a wide variety of parallel models of computation. We have developed two experimental dataflow programming systems for the J-Machine. For the first system, we adapted Papadopoulos' explicit token store [10] to implement stati...
The IEEE International Symposium on DEFECT and FAULT TOLERANCE in VLSI SYSTEMS: Self-Reconfigurable Mesh Array System on FPGA
Massively parallel computers consisting of thousands of processing elements are expected to be high-performance computers in the next decade. One of the major issues in designing massively parallel computers is the reconfiguration strategy in order to provide an efficient fault tolerance mechanism to avoid defective processors in such large scale systems. This paper develops a self-reconfigurab...
Journal: Scientific Programming
Volume 3, Issue –
Pages: –
Publication year: 1994